# 8K Long-Context Processing
## Nemotron H 56B Base 8K
License: Other
Nemotron-H-56B-Base-8K is a large language model developed by NVIDIA. It uses a hybrid Mamba-Transformer architecture, supports an 8K context length, and generates multilingual text.
Tags: Large Language Model · Transformers · Supports Multiple Languages
Publisher: nvidia
## Nemotron H 47B Base 8K
License: Other
NVIDIA Nemotron-H-47B-Base-8K is a large language model (LLM) developed by NVIDIA for text completion tasks. Its hybrid architecture is composed primarily of Mamba-2 and MLP layers, with only five attention layers.
Tags: Large Language Model · Transformers · Supports Multiple Languages
Publisher: nvidia
## Nemotron H 8B Base 8K
License: Other
NVIDIA Nemotron-H-8B-Base-8K is a large language model (LLM) developed by NVIDIA that generates completions for given text fragments. It adopts a hybrid architecture composed primarily of Mamba-2 and MLP layers, with only four attention layers. The model supports an 8K context length and covers multiple languages, including English, German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, and Chinese.
Tags: Large Language Model · Transformers · Supports Multiple Languages
Publisher: nvidia
## Phi 3 Small 8k Instruct
License: MIT
Phi-3-Small-8K-Instruct is a lightweight, open 7B-parameter model focused on high-quality reasoning. It supports an 8K context length and is suited to commercial and research applications in English.
Tags: Large Language Model · Transformers · Other
Publisher: microsoft